
Measures and Limits of Models of Fixation Selection


Abstract

Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research. First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure between probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like the KL-divergence for the small sample sizes typical of eye-tracking data. Second, we provide a lower bound and an upper bound for these measures, based on image-independent properties of fixation data and on between-subject consistency, respectively. These bounds provide a reference frame for judging the predictive power of a model of fixation selection. We provide open-source Python code to compute this reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages over subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions that surpass this upper bound. Taken together, these findings lay out the information required for a well-founded judgment of the quality of any model of fixation selection and should therefore be reported whenever a new model is introduced.
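The abstract singles out the area under the ROC curve and the KL-divergence as the most useful measures and mentions open-source Python code for the reference frame. The snippet below is a minimal, hypothetical sketch of how these two measures are commonly computed for fixation data, not the authors' released code: the function names, the uniform non-fixation baseline, and the eps regularizer are assumptions, and the small-sample correction mentioned in the abstract is not implemented here.

```python
"""Illustrative sketch of two fixation-prediction measures: AUC and KL-divergence.
Assumptions: a saliency map given as a 2-D array, fixations given as integer
(x, y) pixel coordinates, and binned fixation counts for the KL comparison."""

import numpy as np
from sklearn.metrics import roc_auc_score


def auc_for_saliency_map(saliency, fixations_xy, n_nonfix_samples=1000, seed=0):
    """AUC: how well saliency values at fixated pixels separate from values
    at randomly sampled control pixels (0.5 = chance, 1.0 = perfect)."""
    rng = np.random.default_rng(seed)
    h, w = saliency.shape
    fix_vals = saliency[fixations_xy[:, 1], fixations_xy[:, 0]]
    # Control locations sampled uniformly over the image; other baselines
    # (e.g. fixations pooled from other images) are also in common use.
    xs = rng.integers(0, w, n_nonfix_samples)
    ys = rng.integers(0, h, n_nonfix_samples)
    nonfix_vals = saliency[ys, xs]
    labels = np.concatenate([np.ones(len(fix_vals), dtype=int),
                             np.zeros(len(nonfix_vals), dtype=int)])
    scores = np.concatenate([fix_vals, nonfix_vals])
    return roc_auc_score(labels, scores)


def kl_divergence_bits(p_counts, q_counts, eps=1e-12):
    """KL(P || Q) in bits between two binned fixation distributions.
    The eps term only guards against empty bins; it is NOT the principled
    small-sample correction discussed in the paper."""
    p = p_counts / p_counts.sum()
    q = q_counts / q_counts.sum()
    return float(np.sum(p * np.log2((p + eps) / (q + eps))))
```

The choice of non-fixation baseline matters: a uniform spatial baseline, as sketched here, also rewards models that merely reproduce the central fixation bias, whereas sampling control points from fixations on other images factors that image-independent component out.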
